
Update ARM CPU experimental kernels from AO to leverage pip install #1458

Merged — 29 commits, Mar 11, 2025

Conversation

metascroy
Contributor

  • torchao experimental CPU kernels are now installed and loaded automatically by pip.
  • Switch quantization to use new quantize_ API
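
The new API applies quantization in place via `quantize_`. A hedged sketch of the pattern (`quantize_` is torchao's real entry point, but the helper below is illustrative, not the PR's code, and signatures may differ across torchao versions):

```python
# Hedged sketch of the new torchao quantize_ API (replacing the older
# quantizer classes). quantize_ is torchao's real entry point, but this
# helper is illustrative; torchao itself is assumed to be pip-installed.
try:
    from torchao.quantization import quantize_
    HAVE_TORCHAO = True
except ImportError:  # torchao not installed; sketch is illustrative only
    HAVE_TORCHAO = False

def quantize_model(model, config):
    """Apply a torchao quantization config to a model in place."""
    if not HAVE_TORCHAO:
        raise RuntimeError("torchao is not installed")
    quantize_(model, config)  # swaps Linear weights for quantized subclasses
    return model
```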

@metascroy metascroy requested a review from Jack-Khuu January 15, 2025 21:14

pytorch-bot bot commented Jan 15, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/torchchat/1458


❌ 1 New Failure

As of commit 8a9a644 with merge base e5cf6e5:

NEW FAILURE - The following job has failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot facebook-github-bot added the CLA Signed This label is managed by the Meta Open Source bot. label Jan 15, 2025
@Jack-Khuu Jack-Khuu added the Quantization Issues related to Quantization or torchao label Jan 15, 2025
Contributor

@Jack-Khuu Jack-Khuu left a comment


Thanks for making the pip install work with the subclass APIs!!

@Jack-Khuu Jack-Khuu changed the title update experimental kernels in torchchat Update ARM CPU experimental kernels from AO to leverage pip install Jan 15, 2025
@Jack-Khuu
Contributor

cc: @manuelcandales

Can we please have this for MPS? 🥺🥺 (Separate PR)

@Jack-Khuu
Contributor

Awaiting pytorch/executorch#7759

@metascroy
Contributor Author

@Jack-Khuu did you update the version AO uses in ET?

@Jack-Khuu
Contributor

Yup, pytorch/executorch@9836b39 points to pytorch/ao@11333ba

@nikhil-arm

Hello @metascroy @Jack-Khuu, what is the plan to get this into mainline? We would like to use the KleidiAI kernels from aten via this quantizer path. Let us know if we need to raise a new PR.

@Jack-Khuu
Contributor

Hi @nikhil-arm, we're still planning to land this

Can you share the specific commit hashes y'all need?

@Jack-Khuu
Contributor

@nikhil-arm We've bumped the AO pin on main.
Please let me know if there's any additional support needed to unblock the KleidiAI kernels.

@Jack-Khuu
Contributor

After a suite of rebases, pin bumps, and splitting tests up, we know what we're tackling:

  • test-torchao-experimental-cpp (macos-14-xlarge): Tests the AOTI runner; likely failing (also in main) because it doesn't link against the LibOMP from torch, as @malfet mentioned in Bump PT 2025131 and ET pins 20250209 #1493.

  • test-torchao-experimental-et (macos-14-xlarge): Tests the ET runner; looks like an install bug where USE_CPP isn't set, but it will likely run into the same LibOMP issue as above.

@metascroy
Contributor Author

> Hello @metascroy @Jack-Khuu , what is the plan to get this in mainline? We would like to use KleidiAI kernels from aten via this quantizer path. Let us know if we need to raise a new PR ?

Sorry about the delay @nikhil-arm.

@Jack-Khuu let's try to get this landed within the next week. Bumping the ao pin in torchchat had various conflicts with the CI, but I think we can commit to making this work.

I think it does make sense to first land pytorch/ao#1836 in torchao before bumping because they've already deprecated the old quantizers in quantize_.

Contributor

@Jack-Khuu Jack-Khuu left a comment


Re-reviewed, and things are looking great

Thanks again

@Jack-Khuu Jack-Khuu merged commit 3c7e839 into main Mar 11, 2025
72 checks passed
@Jack-Khuu
Contributor

Hi @nikhil-arm The experimental kernels from AO are now in. Can you share a link to the KleidiAI Kernels (either here or a GH issue if you can share more context)?

@metascroy
Contributor Author

> Hi @nikhil-arm The experimental kernels from AO are now in. Can you share a link to the KleidiAI Kernels (either here or a GH issue if you can share more context)?

For context for both @nikhil-arm and @Jack-Khuu. There are two locations of KleidiAI kernels in PyTorch/torchao now.

PyTorch has a 4-bit quantized linear op backed by KleidiAI, and I think @nikhil-arm wants to enable this in torchchat. To do this, you need to pass target="aten" to the layout, i.e., PackedLinearInt8DynamicActivationIntxWeightLayout(target="aten"), here: https://github.com/pytorch/torchchat/blob/main/torchchat/utils/quantize.py#L150. I will leave it to @nikhil-arm to put up a PR to pipe this through. This should then work with most surfaces except ExecuTorch, although I don't know whether it's been tested in anything other than eager mode.
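
A minimal sketch of what that wiring might look like (the layout class and target="aten" come from the comment above; the import paths and the config-building helper are assumptions that may differ across torchao versions, so treat this as illustrative rather than the actual torchchat diff):

```python
# Illustrative sketch, not the exact torchchat change: route 4-bit linear
# quantization to PyTorch's KleidiAI-backed aten op by passing target="aten"
# to the packed layout. The layout class and target value come from the
# discussion; the import paths may differ across torchao versions.
try:
    from torchao.dtypes import PackedLinearInt8DynamicActivationIntxWeightLayout
    from torchao.quantization import quantize_
    HAVE_TORCHAO = True
except ImportError:  # torchao (or this experimental layout) not available
    HAVE_TORCHAO = False

def quantize_with_aten_kernels(model, config_factory):
    """Quantize linear layers, targeting the aten (KleidiAI) kernels.

    config_factory is a hypothetical callable that builds the quantization
    config from a layout; the real torchchat wiring lives in
    torchchat/utils/quantize.py.
    """
    if not HAVE_TORCHAO:
        raise RuntimeError("torchao experimental kernels are not installed")
    layout = PackedLinearInt8DynamicActivationIntxWeightLayout(target="aten")
    quantize_(model, config_factory(layout))
    return model
```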

We have also pulled in KleidiAI kernels in torchao itself. These should work on all surfaces, including ExecuTorch. To enable these, we need to add the flag TORCHAO_BUILD_KLEIDIAI=1 before the pip install:

USE_CPP=1 $PIP_EXECUTABLE install git+https://github.com/pytorch/ao.git@${TORCHAO_PIN}

When enabled, PackedLinearInt8DynamicActivationIntxWeightLayout() will use the KleidiAI kernel when supported (4-bit quantization, has_weight_zeros=false) and fall back to our native torchao kernels otherwise. You can see which kernel is selected at runtime by setting the environment variable TORCH_CPP_LOG_LEVEL=Info. Eventually we want to enable this flag by default, but I have it disabled right now for two reasons: 1) the flag increases install time because it clones and builds KleidiAI; 2) there is limited perf benefit to enabling it now, since the KleidiAI kernels we have enabled in torchao (GEMV neondot kernels) are comparable to the native torchao Arm kernels (also GEMV neondot kernels).
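
A hedged shell sketch of the build flag and the runtime logging knob together (the wrapper function is mine, not torchchat's script; PIP_EXECUTABLE and TORCHAO_PIN are assumed to be provided by the surrounding install scripts, with defaults here only for illustration):

```shell
# Sketch: enable torchao's KleidiAI backend at install time, then surface
# kernel selection at runtime. The wrapper function is illustrative;
# PIP_EXECUTABLE and TORCHAO_PIN come from the surrounding install scripts.
install_torchao_with_kleidiai() {
  TORCHAO_BUILD_KLEIDIAI=1 USE_CPP=1 \
    "${PIP_EXECUTABLE:-pip3}" install \
    "git+https://github.com/pytorch/ao.git@${TORCHAO_PIN:-main}"
}

# At runtime, TORCH_CPP_LOG_LEVEL=Info logs whether the KleidiAI kernel or
# the native torchao kernel was selected, e.g. (hypothetical invocation):
#   TORCH_CPP_LOG_LEVEL=Info python3 torchchat.py generate ...
```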
